Relay selection strategy for cache-aided full-duplex simultaneous wireless information and power transfer system
SHI Anni, LI Taoshen, WANG Zhe, HE Lu
Journal of Computer Applications    2021, 41 (6): 1539-1545.   DOI: 10.11772/j.issn.1001-9081.2020121930
In order to improve the performance of the Simultaneous Wireless Information and Power Transfer (SWIPT) system, a new cache-aided full-duplex relay collaborative system model was constructed, in which free Energy Access Points (EAPs) were considered as an extra energy supplement for the relay nodes. For the system throughput optimization problem, a new SWIPT relay selection strategy based on power allocation cooperation was proposed. Firstly, a problem model was established on the basis of constraints such as communication quality of service and source-node transmit power. Secondly, the original nonlinear mixed integer programming problem was transformed into a pair of coupled optimization problems through mathematical transformation. Finally, the Karush-Kuhn-Tucker (KKT) conditions were used to solve the inner optimization problem with the help of the Lagrangian function, yielding closed-form solutions for the power allocation factor and the relay transmit power; the outer optimization problem was then solved on this basis to select the best relay for cooperative communication. Simulation results show that introducing free EAPs and equipping the relays with caches is feasible and effective, and that the proposed system significantly outperforms traditional relay cooperative communication systems in terms of throughput gain.
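The outer relay-selection step can be sketched as a simple search over the candidate relays, assuming a hypothetical closed_form_allocation routine that returns the power-splitting factor and relay transmit power from the Lagrangian/KKT analysis (the paper's actual closed-form expressions are not reproduced here):

```python
from typing import Callable, Sequence, Tuple

def select_best_relay(
    relays: Sequence[str],
    closed_form_allocation: Callable[[str], Tuple[float, float]],
    throughput: Callable[[str, float, float], float],
):
    """Outer problem: pick the relay whose inner (power-allocation) solution
    yields the highest throughput; the inner solution is assumed closed-form."""
    best_relay, best_rate = None, float("-inf")
    for r in relays:
        rho, p_relay = closed_form_allocation(r)   # power-splitting factor, relay transmit power
        rate = throughput(r, rho, p_relay)         # achievable throughput with this relay
        if rate > best_rate:
            best_relay, best_rate = r, rate
    return best_relay, best_rate
```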
Video abnormal behavior detection based on dual prediction model of appearance and motion features
LI Ziqiang, WANG Zhengyong, CHEN Honggang, LI Linyi, HE Xiaohai
Journal of Computer Applications    2021, 41 (10): 2997-3003.   DOI: 10.11772/j.issn.1001-9081.2020121906
In order to make full use of appearance and motion information in video abnormal behavior detection, a Siamese network model that captures appearance and motion information at the same time was proposed. The two branches of the network were composed of the same autoencoder structure. Several consecutive frames of RGB images were used as the input of the appearance sub-network to predict the next frame, while RGB frame-difference images were used as the input of the motion sub-network to predict the future frame difference. In addition, two factors that weaken prediction-based detection were considered: the diversity of normal samples, and the strong "generation" ability of the autoencoder network, which gives it a good prediction effect even on some abnormal samples. Therefore, a memory enhancement module that learns and stores the "prototype" features of normal samples was added between the encoder and the decoder, so that abnormal samples would produce larger prediction errors. Extensive experiments were conducted on three public anomaly detection datasets: Avenue, UCSD-ped2 and ShanghaiTech. Experimental results show that, compared with other video abnormal behavior detection methods based on reconstruction or prediction, the proposed method achieves better performance. Specifically, the average Area Under Curve (AUC) of the proposed method on the Avenue, UCSD-ped2 and ShanghaiTech datasets reaches 88.2%, 97.5% and 73.0% respectively.
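A minimal PyTorch sketch of the memory-enhancement idea (illustrative only, not the authors' implementation): encoder features are rewritten as soft combinations of a small set of learned prototype slots before decoding, so inputs far from the normal prototypes tend to be predicted poorly and receive larger anomaly scores.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MemoryModule(nn.Module):
    """Stores `num_slots` prototype feature vectors of normal samples and
    rewrites each query feature as a softmax-weighted sum of the prototypes."""
    def __init__(self, num_slots: int = 100, feat_dim: int = 256):
        super().__init__()
        self.memory = nn.Parameter(torch.randn(num_slots, feat_dim))

    def forward(self, z: torch.Tensor) -> torch.Tensor:      # z: (batch, feat_dim)
        weights = F.softmax(z @ self.memory.t(), dim=1)       # addressing weights (batch, num_slots)
        return weights @ self.memory                          # re-expressed features (batch, feat_dim)

def prediction_error(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Per-sample anomaly score: mean squared error between predicted and actual next frame."""
    return F.mse_loss(pred, target, reduction="none").flatten(1).mean(dim=1)
```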
Social recommendation based on dynamic integration of social information
REN Kezhou, PENG Furong, GUO Xin, WANG Zhe, ZHANG Xiaojing
Journal of Computer Applications    2021, 41 (10): 2806-2812.   DOI: 10.11772/j.issn.1001-9081.2020111892
To deal with data sparsity in recommendation algorithms, social data are usually introduced as auxiliary information for social recommendation. Traditional social recommendation algorithms ignore users' interest transfer, so the model cannot describe the dynamic characteristics of user interests; they also ignore the dynamic characteristics of social influence, so the model treats long-past social behaviors and recent social behaviors equally. Aiming at these two problems, a social recommendation model named SLSRec with dynamic integration of social information was proposed. First, a self-attention mechanism was used to build a sequence model of the user's interaction items to describe user interests dynamically. Then, an attention mechanism with time-decayed forgetting was designed to model short-term social interests, and an attention mechanism with collaborative characteristics was designed to model long-term social interests. Finally, the long-term and short-term social interests were combined with the user's short-term interests to obtain the user's final interests and generate the next recommendation. Normalized Discounted Cumulative Gain (NDCG) and Hit Rate (HR) were used to compare the proposed model with a sequence recommendation model (Self-Attention Sequence Recommendation (SASRec)) and a social recommendation model (neural influence Diffusion Network for social recommendation (DiffNet)) on the sparse dataset Brightkite and the dense dataset Last.FM. Experimental results show that, compared with the DiffNet model, the SLSRec model improves HR by 8.5% on the sparse dataset; compared with the SASRec model, it improves NDCG by 2.1% on the dense dataset, indicating that considering the dynamic characteristics of social information makes the recommendation results more accurate.
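A rough sketch of the time-decayed ("forgetting with time") attention used for short-term social interests, with illustrative scoring and an assumed exponential decay (the exact formulation in SLSRec may differ):

```python
import numpy as np

def time_decayed_attention(query, friend_embeds, elapsed_days, decay=0.1):
    """Aggregate friends' interest vectors with attention scores that decay
    with the time elapsed since each social interaction."""
    scores = friend_embeds @ query                      # similarity between user and each friend
    scores = scores - decay * np.asarray(elapsed_days)  # penalize older interactions
    weights = np.exp(scores - scores.max())             # numerically stable softmax
    weights /= weights.sum()
    return weights @ friend_embeds                      # short-term social interest vector

# Example: 3 friends with 8-dimensional embeddings.
rng = np.random.default_rng(0)
q, F = rng.normal(size=8), rng.normal(size=(3, 8))
print(time_decayed_attention(q, F, elapsed_days=[1, 30, 365]))
```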
Ultra-short-term wind power prediction based on empirical mode decomposition and multi-branch neural network
MENG Xinyu, WANG Ruihan, ZHANG Xiping, WANG Mingjie, QIU Gang, WANG Zhengxia
Journal of Computer Applications    2021, 41 (1): 237-242.   DOI: 10.11772/j.issn.1001-9081.2020060930
Wind power prediction is an important basis for the monitoring and information management of wind farms. Ultra-short-term wind power prediction is often used to balance load and optimize scheduling, and therefore requires high prediction accuracy. Due to the complex environment of wind farms and the many uncertainties of wind speed, wind power time series signals are often non-stationary and random. A Recurrent Neural Network (RNN) is suitable for time series tasks, but non-periodic and non-stationary time series signals increase the difficulty of network learning. To overcome the interference of non-stationary signals in the prediction task and improve the prediction accuracy of wind power, an ultra-short-term wind power prediction method combining empirical mode decomposition and a multi-branch neural network was proposed. Firstly, the original wind power time series signal was decomposed by Empirical Mode Decomposition (EMD) to reconstruct the data tensor. Then, a convolution layer and a Gated Recurrent Unit (GRU) layer were used to extract local features and trend features respectively. Finally, the prediction results were obtained through feature fusion and a fully connected layer. Experimental results on the dataset of a wind farm in Inner Mongolia show that, compared with the AutoRegressive Integrated Moving Average (ARIMA) model, the proposed method improves the prediction accuracy by nearly 30%, which verifies its effectiveness.
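The decomposition step might look like the following sketch, assuming the PyEMD package (installed as EMD-signal); the raw power series is split into Intrinsic Mode Functions (IMFs) that are stacked into a tensor for the downstream convolution and GRU branches, which are omitted here:

```python
import numpy as np
from PyEMD import EMD  # assumed dependency: pip install EMD-signal

def decompose_to_tensor(power_series: np.ndarray) -> np.ndarray:
    """Decompose a 1-D wind-power series into IMFs and stack them as channels."""
    imfs = EMD().emd(power_series)      # shape: (num_imfs, len(power_series))
    return imfs.astype(np.float32)      # fed to the parallel Conv/GRU branches downstream

signal = np.sin(np.linspace(0, 20, 512)) + 0.3 * np.random.randn(512)
tensor = decompose_to_tensor(signal)
print(tensor.shape)                     # (num_imfs, 512); num_imfs depends on the signal
```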
Target detection of carrier-based aircraft based on deep convolutional neural network
ZHU Xingdong, TIAN Shaobing, HUANG Kui, FAN Jiali, WANG Zheng, CHENG Huacheng
Journal of Computer Applications    2020, 40 (5): 1529-1533.   DOI: 10.11772/j.issn.1001-9081.2019091694
The carrier-based aircraft on a carrier deck are dense and occluded, so the aircraft targets are difficult to detect and the detection effect is easily affected by lighting conditions and target size. Therefore, an improved Faster R-CNN (Faster Region with Convolutional Neural Network) carrier-based aircraft target detection method was proposed. In this method, a loss function with a repulsion loss strategy was designed and, combined with multi-scale training, pictures collected under laboratory conditions were used to train and test the deep convolutional neural network. Test experiments show that, compared with the original Faster R-CNN detection model, the improved model has a better detection effect on occluded aircraft targets, with recall increased by 7 percentage points and precision increased by 6 percentage points. The experimental results show that the proposed method can automatically and comprehensively extract the characteristics of carrier-based aircraft targets and solve the detection problem of occluded targets; its detection accuracy and speed meet practical needs, and it shows strong adaptability and high robustness under different lighting conditions and target sizes.

Magnetic resonance image segmentation of articular synovium based on improved U-Net
WEI Xiaona, XING Jiaqi, WANG Zhenyu, WANG Yingshan, SHI Jie, ZHAO Di, WANG Hongzhi
Journal of Computer Applications    2020, 40 (11): 3340-3345.   DOI: 10.11772/j.issn.1001-9081.2020030390
In order to accurately diagnose a synovitis patient's condition, doctors mainly rely on manual labeling and outlining to extract synovial hyperplasia areas from Magnetic Resonance Imaging (MRI) images. This method is time-consuming and inefficient, is somewhat subjective, and makes poor use of image information. To solve this problem, a new articular synovium segmentation algorithm, named the 2D ResU-net segmentation algorithm, was proposed. Firstly, the two-layer residual block of the Residual Network (ResNet) was integrated into U-Net to construct the 2D ResU-net. Secondly, the sample dataset was divided into a training set and a testing set, and data augmentation was performed on the training set. Finally, all the augmented training samples were used to train the network model. To test the segmentation effect of the model, the tomographic images containing synovitis in the testing set were selected for segmentation tests. The final average segmentation accuracy indexes are as follows: a Dice Similarity Coefficient (DSC) of 69.98%, an Intersection over Union (IoU) index of 79.90% and a Volumetric Overlap Error (VOE) of 12.11%. Compared with the U-Net algorithm, the 2D ResU-net algorithm has the DSC increased by 10.72%, the IoU index increased by 4.24% and the VOE decreased by 11.57%. Experimental results show that this algorithm achieves a better segmentation effect on synovial hyperplasia areas in MRI images, and can assist doctors in diagnosing the disease condition in time.
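For reference, the reported overlap metrics can be computed from binary masks as in the sketch below; these are the standard definitions, and note that VOE is defined slightly differently in some papers:

```python
import numpy as np

def segmentation_metrics(pred: np.ndarray, gt: np.ndarray):
    """Dice similarity coefficient, IoU and volumetric overlap error for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    dsc = 2.0 * inter / (pred.sum() + gt.sum() + 1e-8)
    iou = inter / (union + 1e-8)
    voe = 1.0 - iou                     # one common definition of VOE
    return dsc, iou, voe

pred = np.zeros((64, 64), dtype=np.uint8); pred[10:40, 10:40] = 1
gt   = np.zeros((64, 64), dtype=np.uint8); gt[15:45, 15:45] = 1
print(segmentation_metrics(pred, gt))
```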
Methods of training data augmentation for medical image artificial intelligence aided diagnosis
WEI Xiaona, LI Yinghao, WANG Zhenyu, LI Haozun, WANG Hongzhi
Journal of Computer Applications    2019, 39 (9): 2558-2567.   DOI: 10.11772/j.issn.1001-9081.2019030450
Obtaining large numbers of samples by conventional means is time-consuming, labor-intensive and expensive for Artificial Intelligence (AI) application research in different fields, so a variety of sample augmentation methods have been proposed in many AI research areas. Firstly, the research background and significance of data augmentation were introduced. Then, the methods of data augmentation in several common fields (including natural image recognition, character recognition and discourse parsing) were summarized, and on this basis a detailed overview of sample acquisition or augmentation methods in the field of medical image aided diagnosis was provided, covering X-ray, Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) images. Finally, the key issues of data augmentation methods in AI application fields were summarized and future development trends were discussed. It can be concluded that obtaining a sufficient number of broadly representative training samples is key to the research and development of all AI fields. Both common fields and professional fields have adopted sample augmentation, and different fields, or even different research directions within the same field, use different sample acquisition or augmentation methods. In addition, sample augmentation is not simply a matter of increasing the number of samples; rather, it aims to reproduce, as far as possible, the real samples that a small sample set cannot fully cover, so as to improve sample diversity and enhance AI system performance.

YOLO network character recognition method with variable candidate box density for international phonetic alphabet
ZHENG Yi, QI Donglian, WANG Zhenyu
Journal of Computer Applications    2019, 39 (6): 1675-1679.   DOI: 10.11772/j.issn.1001-9081.2018112361
Aiming at the low recognition accuracy and poor practicability of traditional character feature extraction methods for the International Phonetic Alphabet (IPA), a You Only Look Once (YOLO) network character recognition method with variable candidate-box density was proposed for IPA. Firstly, based on the YOLO network and the characteristics of IPA characters, namely that they are closely arranged along the X-axis direction and have many types and varied forms, the distribution density of candidate boxes in the YOLO network was changed. Then, the candidate-box density along the X-axis was increased while that along the Y-axis was reduced, to construct the YOLO-IPA network. The proposed method was tested on an IPA dataset collected from Chinese Dialect Vocabulary, with 1360 images of 72 categories. The experimental results show that the proposed method has a recognition rate of 93.72% for large characters and 89.31% for small characters. Compared with traditional character recognition algorithms, the proposed method greatly improves recognition accuracy. Meanwhile, the detection time was reduced to less than 1 s in the experimental environment, so the proposed method can meet the needs of real-time applications.
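The variable candidate-box density idea can be illustrated by generating anchor centres on a grid that is denser along the X-axis than along the Y-axis; the cell counts below are placeholders rather than the actual YOLO-IPA configuration:

```python
import numpy as np

def anchor_centers(img_w: int, img_h: int, cells_x: int = 26, cells_y: int = 7) -> np.ndarray:
    """Return (N, 2) anchor-centre coordinates; more cells along X than Y,
    matching IPA text that is densely packed horizontally."""
    xs = (np.arange(cells_x) + 0.5) * (img_w / cells_x)
    ys = (np.arange(cells_y) + 0.5) * (img_h / cells_y)
    gx, gy = np.meshgrid(xs, ys)
    return np.stack([gx.ravel(), gy.ravel()], axis=1)

print(anchor_centers(416, 416).shape)   # (26 * 7, 2) = (182, 2)
```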
Robust multi-manifold discriminant local graph embedding based on maximum margin criterion
YANG Yang, WANG Zhengqun, XU Chunlin, YAN Chen, JU Ling
Journal of Computer Applications    2019, 39 (5): 1453-1458.   DOI: 10.11772/j.issn.1001-9081.2018102113
In most existing multi-manifold face recognition algorithms, the original noisy data are processed directly, but noisy data often have a negative impact on the accuracy of the algorithm. To solve this problem, a Robust Multi-Manifold Discriminant Local Graph Embedding algorithm based on the Maximum Margin Criterion (RMMDLGE/MMC) was proposed. Firstly, a denoising projection was introduced to process the original data with iterative noise reduction, extracting purer data. Secondly, the data images were divided into blocks and a multi-manifold model was established. Thirdly, combined with the idea of the maximum margin criterion, an optimal projection matrix was sought to maximize the sample distances across different manifolds while minimizing the sample distances within the same manifold. Finally, the distance from the test sample manifold to the training sample manifolds was calculated for classification and identification. Experimental results show that, compared with the well-performing Multi-Manifold Local Graph Embedding algorithm based on the Maximum Margin Criterion (MLGE/MMC), the classification recognition rate of the proposed algorithm is improved by 1.04, 1.28 and 2.13 percentage points on the noisy ORL, Yale and FERET databases respectively, and the classification effect is obviously improved.
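The maximum-margin-criterion step amounts to choosing projection directions that maximize the trace of W^T(S_b - S_w)W; a minimal sketch on ordinary between-class and within-class scatter matrices (not the paper's manifold-specific ones) is:

```python
import numpy as np
from scipy.linalg import eigh

def mmc_projection(X: np.ndarray, y: np.ndarray, dim: int) -> np.ndarray:
    """Return the top-`dim` eigenvectors of (Sb - Sw): the MMC projection matrix."""
    mean = X.mean(axis=0)
    Sb = np.zeros((X.shape[1], X.shape[1]))
    Sw = np.zeros_like(Sb)
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sb += len(Xc) * np.outer(mc - mean, mc - mean)   # between-class scatter
        Sw += (Xc - mc).T @ (Xc - mc)                    # within-class scatter
    vals, vecs = eigh(Sb - Sw)                           # symmetric eigendecomposition
    return vecs[:, np.argsort(vals)[::-1][:dim]]         # columns = projection directions
```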
Brain network analysis method based on feature vector of electroencephalograph subsequence
YANG Xiong, YAO Rong, YANG Pengfei, WANG Zhe, LI Haifang
Journal of Computer Applications    2019, 39 (4): 1224-1228.   DOI: 10.11772/j.issn.1001-9081.2018092037
Complex network analysis methods for working memory mostly use channels as nodes and analyze from the spatial perspective, while rarely analyzing channel networks from the temporal perspective. Considering the high temporal resolution of the ElectroEncephaloGraph (EEG) and the difficulty of time series segmentation, a method for constructing and analyzing the network from the temporal perspective was proposed. Firstly, microstates were used to divide the EEG signal of each channel into different sub-segments, which served as the nodes of the network. Secondly, effective features in the sub-segments were extracted and selected as sub-segment features, and the correlation between sub-segment feature vectors was calculated to construct the channel time-sequence complex network. Finally, the attributes and similarity of the constructed network were analyzed and verified on schizophrenia EEG data. The experimental results show that analyzing schizophrenia data with the proposed method makes full use of the temporal characteristics of EEG signals, helps to understand the characteristics of the time-sequence channel network constructed during working memory in patients with schizophrenia from the temporal perspective, and explains the significant differences between patients and normal controls.
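Building the channel time-sequence network from sub-segment feature vectors reduces to computing pairwise correlations and thresholding them into an adjacency matrix, as in this sketch (the microstate segmentation and feature extraction themselves are omitted, and the threshold value is illustrative):

```python
import numpy as np

def build_time_network(segment_features: np.ndarray, threshold: float = 0.6) -> np.ndarray:
    """segment_features: (num_segments, num_features), one row per EEG sub-segment.
    Returns a binary adjacency matrix linking strongly correlated sub-segments."""
    corr = np.corrcoef(segment_features)          # Pearson correlation between sub-segments
    adj = (np.abs(corr) >= threshold).astype(int)
    np.fill_diagonal(adj, 0)                      # no self-loops
    return adj

feats = np.random.randn(12, 20)                   # e.g. 12 sub-segments, 20 features each
print(build_time_network(feats).sum(axis=1))      # node degrees
```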
Low-density 3D model information hiding algorithm based on multiple fusion states
REN Shuai, XU Zhenchao, WANG Zhen, HE Yuan, ZHANG Tao, SU Dongxu, MU Dejun
Journal of Computer Applications    2019, 39 (4): 1100-1105.   DOI: 10.11772/j.issn.1001-9081.2018091855
Aiming at the problem that existing 3D model information hiding algorithms cannot effectively resist uneven compression, a multi-carrier low-density information hiding algorithm based on multiple fusion states was proposed. Firstly, multiple 3D models were positioned, oriented and normalized by translation and scaling. Secondly, the 3D models were rotated at different angles and merged, using the center point as the merging point, to obtain multiple fusion states. Thirdly, local height and Mean Shift clustering analysis were used to divide the energy of the vertices of the fusion-state models, obtaining vertices with different energies. Finally, by changing the vertex coordinates, the secret information scrambled by the Arnold transform was quickly hidden in the multiple fusion states and 3D models. Experimental results show that the proposed algorithm is robust against uneven compression attacks and has high invisibility.
Address hopping proactive defense model in IPv6 based on sliding time window
KONG Yazhou, ZHANG Liancheng, WANG Zhenxing
Journal of Computer Applications    2018, 38 (7): 1936-1940.   DOI: 10.11772/j.issn.1001-9081.2018010073
Aiming at the problem that IPv6 nodes are easily subjected to probing attacks because end-to-end communication is restored in IPv6 networks, a proactive defense model of Address Hopping based on a Sliding Time Window in IPv6 (AHSTW) was proposed. Session parameters such as the address hopping interval were first negotiated using a shared key, and then the concept of sending and receiving time windows was introduced: the two communicating parties sent or received only the packets within the time window. Through a Time Window Adaptive Adjustment (TWAA) algorithm, the time window could be adjusted in time according to changes in network delay, adapting to changes in the network environment. Theoretical analysis shows that the proposed model can effectively resist data interception attacks and Denial of Service (DoS) attacks on the target IPv6 nodes. The experimental results show that, for the same packet sizes, the extra CPU overhead of the AHSTW model is 2-5 percentage points, with no significant increase in communication cost and no significant decline in communication efficiency. The addresses and ports of the two communicating parties appear random, decentralized and out of order, which greatly increases the cost and difficulty for attackers and protects the network security of IPv6.
Information hiding algorithm for 3D models based on feature point labeling and clustering
REN Shuai, ZHANG Tao, XU Zhenchao, WANG Zhen, HE Yuan, LIU Yunong
Journal of Computer Applications    2018, 38 (4): 1017-1022.   DOI: 10.11772/j.issn.1001-9081.2017092348
Aiming at the problem that some 3D model-based information hiding algorithms cannot withstand combined attacks, a new strategy based on feature point labeling and clustering was proposed. Firstly, edge folding was adopted to achieve mesh simplification, and all the vertices were labeled in order by their energy level. Secondly, the ordered vertices were clustered and re-ordered by using local height theory and Mean Shift clustering analysis. Lastly, the hidden information and the cover model carrier information were optimized, matched and modified by Logistic chaos mapping scrambling and a genetic algorithm, completing the final hiding. The data in the hiding areas were labeled and screened locally and globally according to the energy weight, which benefits the robustness and transparency of the algorithm. The experimental results show that, compared with 3D information hiding algorithms based on the inscribed sphere and the outer skeleton, the robustness of the proposed algorithm against single or combined attacks is significantly improved, and it achieves the same degree of invisibility.
Stability analysis of interactive development between manufacturing enterprise and logistics enterprise based on Logistic-Volterra model
WANG Zhenzhen, WU Yingjie
Journal of Computer Applications    2018, 38 (2): 589-595.   DOI: 10.11772/j.issn.1001-9081.2017082011
The traditional literature mainly considers the cooperative relationship while neglecting the competitive relationship between manufacturing and logistics enterprises during interactive development. An improved model, namely the Logistic-Volterra model, was proposed based on the traditional Logistic model, considering the contribution coefficients and competition coefficients at the same time. Firstly, the Logistic-Volterra model was built and its stable solution was solved; the mathematical conditions for achieving stability and their real-world interpretation were then discussed. Secondly, the factors affecting the interactive development of manufacturing and logistics enterprises were explored using MATLAB numerical simulation, and the differences between the improved model and the traditional model were also discussed. Finally, manufacturing enterprise A and logistics enterprise B were taken as an example to analyze competitive behavior in the process of cooperation, as well as the impact of co-opetition behavior on the interests of both sides. The theoretical analysis and simulation results show that the stability of the system is strongly affected by the contribution coefficients, the competition coefficients and the environmental capacity, and that the results are more reasonable when the competitive relationship is considered in the model. This means that manufacturing and logistics enterprises should fully consider the effects of competition on both sides.
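A hedged numerical sketch of such a Logistic-Volterra system is shown below; the functional form and coefficients are illustrative stand-ins rather than the paper's calibrated model:

```python
import numpy as np
from scipy.integrate import odeint

def logistic_volterra(state, t, r1, r2, K1, K2, a12, a21, b12, b21):
    """x: manufacturing enterprise scale, y: logistics enterprise scale.
    a*: contribution (cooperation) coefficients, b*: competition coefficients."""
    x, y = state
    dx = r1 * x * (1 - x / K1 + a12 * y / K2 - b12 * y / K2)
    dy = r2 * y * (1 - y / K2 + a21 * x / K1 - b21 * x / K1)
    return [dx, dy]

t = np.linspace(0, 50, 500)
traj = odeint(logistic_volterra, [0.1, 0.1], t,
              args=(0.8, 0.6, 10.0, 8.0, 0.4, 0.5, 0.2, 0.1))
print(traj[-1])   # approximate equilibrium scales of the two enterprises
```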
UML profile for modeling ERP cloud service platform based on SaaS
WANG Zhen, JIANG Zheyuan
Journal of Computer Applications    2017, 37 (7): 2027-2033.   DOI: 10.11772/j.issn.1001-9081.2017.07.2027
Since the traditional Enterprise Resource Planning (ERP) system has low openness, low expansibility and high cost in the current business environment, an ERP system modeling method based on the Software-as-a-Service (SaaS) model was proposed. Firstly, a new primitive set, a Unified Modeling Language (UML) profile, was obtained by extending the primitives of UML. Secondly, an equivalent meta-model was established and semantic unambiguity was ensured by the Object Constraint Language (OCL). Finally, the cloud ERP was described by a model framework composed of an application diagram, an operation dictionary, a physical diagram and a topological diagram, transforming the cloud ERP system into documents. The proposed method focuses on modular design, and all stages adopt a unified visual meta-model. According to the modeling requirements, the SaaS-based ERP model was successfully established on the Enterprise Architect (EA) platform with the proposed method, and its effectiveness was verified. The theoretical analysis and modeling results show that the proposed method can ensure interoperability and consistency between models and improve the scalability of the ERP system.
Metropolis ray tracing based integrated filter
WU Xi, XU Qing, BU Hongjuan, WANG Zheng
Journal of Computer Applications    2016, 36 (9): 2605-2608.   DOI: 10.11772/j.issn.1001-9081.2016.09.2605
The Monte Carlo method is the basis of calculating global illumination, and many Monte Carlo-based global illumination algorithms have been proposed; however, most of them have limitations in terms of rendering time. Based on the Monte Carlo method, a new global illumination algorithm was proposed, combining the Metropolis ray tracing algorithm with an integrated filter. The algorithm is composed of two parts: in the first part, multiple sets of filters with different scales were used to smooth the image; in the second part, the filtered images were combined into the final result. Relative Mean Squared Error (RMSE) was used as the basis for selecting the filtering scale, and an appropriate filter was adaptively selected for each pixel during sampling and reconstruction, aiming to minimize the error and obtain better reconstruction results. Experimental results show that the proposed method outperforms many traditional Metropolis algorithms in terms of both efficiency and image quality.
IMTP: a privacy protection mechanism for MIPv6 identity and moving trajectory
WU Huiting, WANG Zhenxing, ZHANG Liancheng, KONG Yazhou
Journal of Computer Applications    2016, 36 (8): 2236-2240.   DOI: 10.11772/j.issn.1001-9081.2016.08.2236
Privacy protection for identity and trajectory has become a hot topic in the research and application of Mobile IPv6 (MIPv6). Aiming at the problem that the mobility messages and application data of a mobile node can be maliciously analyzed to expose its identity and to locate and track it, an MIPv6 address privacy protection mechanism named IMTP was proposed, which hides the identity and prevents location tracking. First, by applying a self-defined mobility message option, Encryptedword, and performing an XOR transformation with the home address, IMTP achieves privacy protection of the MIPv6 node identity. Second, by means of mutual authentication among nodes, the mechanism randomly appoints a location proxy and hides the care-of address of the mobile node, thus realizing privacy protection of the MIPv6 node trajectory. Simulation results indicate that IMTP provides high-quality privacy protection with low resource cost. Meanwhile, it requires only minor modifications to the standard MIPv6 protocol, supports route optimization well, and offers flexible deployment, strong scalability and other advantages. The dual privacy protection for identity and trajectory provided by IMTP helps reduce the probability that a specific mobile node's communication data are intercepted, thus guaranteeing communication security among mobile nodes.
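The identity-hiding step, XORing the home address with a shared secret carried in the self-defined Encryptedword option, can be sketched with Python's ipaddress module; option encoding and key negotiation are omitted:

```python
import ipaddress

def xor_address(home_addr: str, secret: int) -> str:
    """Mask an IPv6 home address by XORing it with a 128-bit shared secret;
    applying the same operation again restores the original address."""
    addr_int = int(ipaddress.IPv6Address(home_addr))
    return str(ipaddress.IPv6Address(addr_int ^ (secret & (2**128 - 1))))

secret = 0x0123456789ABCDEF0123456789ABCDEF
masked = xor_address("2001:db8::1", secret)
print(masked, xor_address(masked, secret))   # second call recovers 2001:db8::1
```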
Regularized neighborhood preserving embedding algorithm based on QR decomposition
ZHAI Dongling, WANG Zhengqun, XU Chunlin
Journal of Computer Applications    2016, 36 (6): 1624-1629.   DOI: 10.11772/j.issn.1001-9081.2016.06.1624
When training samples are lacking, the estimation of the low-dimensional subspace may deviate seriously from the data. To solve this problem, a novel regularized neighborhood preserving embedding algorithm based on QR decomposition was proposed. Firstly, a local Laplacian matrix was defined to preserve the local structure of the original data. Secondly, the eigenspectrum space of the within-class scatter matrix was divided into three subspaces, and a new eigenvector space was obtained through a weight function defined by an inverse spectrum model, thereby preprocessing the high-dimensional data. Finally, a neighborhood preserving adjacency matrix was defined, and the projection matrix obtained by QR decomposition together with the nearest neighbor classifier was used for face recognition. Compared with the Regularized Generalized Discriminant Locality Preserving Projection (RGDLPP) algorithm, the recognition accuracy of the proposed method is increased by 2, 1.5, 1.5 and 2 percentage points on the ORL, Yale, FERET and PIE databases respectively. The experimental results show that the proposed algorithm is easy to implement and has a relatively high recognition rate under Small Sample Size (SSS) conditions.
Rock classification of multi-feature fusion based on collaborative representation
LIU Juexian, TENG Qizhi, WANG Zhengyong, HE Xiaohai
Journal of Computer Applications    2016, 36 (3): 854-858.   DOI: 10.11772/j.issn.1001-9081.2016.03.854
To solve the issues of high time consumption and low recognition rate in the traditional component analysis of rock slices, a method of component analysis of rock slices based on Collaborative Representation (CR) was proposed. Firstly, the texture features of grains in rock slices were studied, and the combination of Hierarchical Multi-scale Local Binary Pattern (HMLBP) and Gray Level Co-occurrence Matrix (GLCM) was shown to characterize the texture of grains in rock slices well. Then, to reduce the time complexity of classification, the dimension of the new features was reduced to 100 by using Principal Component Analysis (PCA). Finally, Collaborative Representation based Classification (CRC) was used as the classifier. Different from Sparse Representation based Classification (SRC), the prediction samples were encoded collaboratively by all the samples in the training dictionary rather than by single samples alone, and the shared attributes of different samples help improve the recognition rate. The experimental results show that, compared with SRC, the recognition speed of the method increases by 300% and the recognition rate increases by 2%. In practical applications, it can distinguish quartz and feldspar components in rock slices well.
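Collaborative Representation based Classification has a simple closed form: code the test sample over the whole training dictionary with ridge regularization, then assign the class with the smallest class-wise reconstruction residual. A minimal sketch:

```python
import numpy as np

def crc_classify(D: np.ndarray, labels: np.ndarray, y: np.ndarray, lam: float = 0.01):
    """D: (d, n) training dictionary (ideally with unit-norm columns); labels: (n,); y: (d,) test sample."""
    P = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T)   # (D^T D + lam*I)^{-1} D^T
    alpha = P @ y                                                   # collaborative coding vector
    best_class, best_res = None, np.inf
    for c in np.unique(labels):
        mask = labels == c
        residual = np.linalg.norm(y - D[:, mask] @ alpha[mask])     # class-wise reconstruction error
        if residual < best_res:
            best_class, best_res = c, residual
    return best_class
```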
Real-time face pose estimation system based on 3D face model on Android mobile platform
WANG Haipeng, WANG Zhengliang, XU Weiwei, FAN Ran
Journal of Computer Applications    2015, 35 (8): 2321-2326.   DOI: 10.11772/j.issn.1001-9081.2015.08.2321
Concerning the high performance requirements of face pose estimation systems, which prevent them from running on mobile phones in real time, a real-time face pose estimation system was realized for Android mobile phone terminals. First of all, one frontal face image and one face image with a certain offset angle were captured by the camera to establish a simple 3D face model with the Structure from Motion (SfM) algorithm. Secondly, the system extracted corresponding feature points from the real-time face image to the 3D face model, and the 3D face pose parameters were obtained by the POSIT (Pose from Orthography and Scaling with ITeration) algorithm. At last, the 3D face model was displayed on the Android mobile terminal in real time using OpenGL (Open Graphics Library). The experimental results show that the speed of detecting and displaying the face pose reaches 20 frame/s on real-time video, which is close to that of the 3D face pose estimation algorithm based on affine correspondence on computer terminals, and the detection speed on a large number of image sequences reaches 50 frame/s. The results indicate that the system can satisfy the performance requirements of Android mobile phone terminals and the real-time requirement of face pose detection.

Energy-efficient strategy of distributed file system based on data block clustering storage
WANG Zhengying, YU Jiong, YING Changtian, LU Liang
Journal of Computer Applications    2015, 35 (2): 378-382.   DOI: 10.11772/j.issn.1001-9081.2015.02.0378
Concerning the low server utilization and complicated energy management caused by the random block placement strategy in distributed file systems, a visiting-feature vector was built for each data block to depict random block access behavior. The K-means algorithm was adopted to cluster these vectors, and the datanodes were then divided into multiple regions to store the data blocks of different clusters. The data blocks were dynamically reorganized according to the clustering results when the system load was low, so that unnecessary datanodes could sleep to reduce energy consumption. The flexible setting of inter-cluster distance parameters makes the strategy suitable for different scenarios with different requirements on energy consumption and utilization. Mathematical analysis and experimental results show that, compared with hot-cold zoning strategies, the proposed method has higher energy-saving efficiency, reducing energy consumption by 35% to 38%.
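The clustering step can be sketched with scikit-learn as below; the feature fields and the number of clusters are illustrative, and the mapping from clusters to datanode regions is the strategy's own policy:

```python
import numpy as np
from sklearn.cluster import KMeans

# Each row is one data block's visiting-feature vector,
# e.g. [access frequency, mean access interval, recency score].
block_features = np.random.rand(1000, 3)

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(block_features)
zone_of_block = kmeans.labels_          # cluster id used as the storage zone for each block

for zone in range(4):
    print(f"zone {zone}: {np.sum(zone_of_block == zone)} blocks")
```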

Wavelet thresholding method based on genetic optimization function curve for ECG noise removal
WANG Zheng, HE Hong, TAN Yonghong
Journal of Computer Applications    2014, 34 (9): 2600-2603.   DOI: 10.11772/j.issn.1001-9081.2014.09.2600
In order to overcome the oscillation caused by hard-threshold wavelet filtering and the waveform distortion brought by soft-threshold wavelet filtering, a wavelet threshold de-noising method based on a genetically optimized function curve, named GOCWT, was proposed. In GOCWT, a quadratic function was used to approximate the optimal threshold function curve. The Root Mean Square Error (RMSE) and the smoothness of the reconstructed signal were used to design the fitness function, and the Genetic Algorithm (GA) was utilized to optimize the parameters of the new thresholding function. Through the analysis of 48 segments of ECG signals, the new method was found to yield a 36% increase in smoothness compared to the hard-threshold method and a 32% decrease in RMSE compared to the soft-threshold method. The results show that the proposed algorithm outperforms hard-threshold and soft-threshold wavelet filtering: it not only avoids the undesirable oscillation of the filtered signal, but also preserves the minute features of the signal, including peak values.
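A sketch of applying a quadratic threshold-function curve to the wavelet detail coefficients, using PyWavelets; the GA search for the curve parameters is replaced here by fixed illustrative values:

```python
import numpy as np
import pywt  # assumed dependency: PyWavelets

def quadratic_threshold(w: np.ndarray, T: float) -> np.ndarray:
    """Quadratic transition between hard and soft thresholding on [T, 2T]
    (illustrative curve; GOCWT tunes the curve parameters with a GA)."""
    out = np.zeros_like(w)
    a = np.abs(w)
    mid, big = (a > T) & (a <= 2 * T), a > 2 * T
    out[mid] = np.sign(w[mid]) * (a[mid] - T) ** 2 / T   # smooth quadratic rise from 0
    out[big] = w[big] - np.sign(w[big]) * T              # matches soft thresholding at |w| = 2T
    return out

def denoise_ecg(signal: np.ndarray, wavelet: str = "db4", level: int = 4, T: float = 0.1):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    coeffs = [coeffs[0]] + [quadratic_threshold(c, T) for c in coeffs[1:]]  # threshold detail bands
    return pywt.waverec(coeffs, wavelet)

noisy = np.sin(np.linspace(0, 8 * np.pi, 1024)) + 0.1 * np.random.randn(1024)
clean = denoise_ecg(noisy)
```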

Energy-efficient strategy for disks in RAMCloud
LU Liang, YU Jiong, YING Changtian, WANG Zhengying, LIU Jiankuang
Journal of Computer Applications    2014, 34 (9): 2518-2522.   DOI: 10.11772/j.issn.1001-9081.2014.09.2518
The emergence of RAMCloud has improved the user experience of Online Data-Intensive (OLDI) applications; however, its energy consumption is higher than that of traditional cloud data centers. An energy-efficient strategy for disks under this architecture was put forward to solve this problem. Firstly, the fitness function and roulette wheel selection of the genetic algorithm were introduced to choose energy-saving disks for persistent data backup; secondly, a reasonable buffer size was used to extend the average continuous idle time of disks, so that some of them could be put into standby during their idle time. The simulation results show that the proposed strategy can save about 12.69% of energy in a given RAMCloud system with 50 servers. The buffer size affects both the energy-saving effect and data availability, so it must be weighed carefully.
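The disk-choosing step relies on standard roulette-wheel selection, sketched below; the fitness definition for "energy-saving disks" is the strategy's own and is represented here by arbitrary values:

```python
import numpy as np

def roulette_select(fitness: np.ndarray, k: int, rng=None) -> np.ndarray:
    """Pick k disk indices with probability proportional to fitness."""
    if rng is None:
        rng = np.random.default_rng()
    probs = fitness / fitness.sum()
    return rng.choice(len(fitness), size=k, replace=False, p=probs)

# Example: higher fitness = better energy-saving candidate for backup placement.
fitness = np.array([0.9, 0.2, 0.7, 0.4, 0.8])
print(roulette_select(fitness, k=2, rng=np.random.default_rng(42)))
```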

Energy-efficient strategy for dynamic management of cloud storage replica based on user visiting characteristic
WANG Zhengying, YU Jiong, YING Changtian, LU Liang, BAN Aiqin
Journal of Computer Applications    2014, 34 (8): 2256-2259.   DOI: 10.11772/j.issn.1001-9081.2014.08.2256
To address the low server utilization and serious energy waste in cloud computing environments, an energy-efficient strategy for dynamic management of cloud storage replicas based on user visiting characteristics was put forward. By transforming the study of user visiting characteristics into the calculation of the visiting temperature of each Block, DataNodes actively applied to sleep according to the global visiting temperature, so as to save energy. The dormancy application and dormancy verification algorithms were given in detail, and the strategy for handling visits during DataNode dormancy was described explicitly. The experimental results show that, after adopting this strategy, 29%-42% of DataNodes can sleep, energy consumption is reduced by 31%, and server response time remains good. The performance analysis shows that the proposed strategy can effectively reduce energy consumption while guaranteeing data availability.

Method of IPv6 neighbor cache protection based on improved reversed detection
KONG Yazhou, WANG Zhenxing, WANG Yu, ZHANG Liancheng
Journal of Computer Applications    2014, 34 (4): 950-954.   DOI: 10.11772/j.issn.1001-9081.2014.04.0950
The IPv6 Neighbor Cache (NC) is very vulnerable to attack; therefore, an improved method named Reversed Detection Plus (RD+) was proposed. A timestamp and a sequence number were first introduced to strictly limit the response time and to match responses respectively; an RD+ queue was defined to store timestamps and sequence numbers, and a Random Early Detection based on Timestamp (RED-T) algorithm was designed to prevent Denial of Service (DoS) attacks. The experimental results show that RD+ can effectively protect the IPv6 NC against spoofing and DoS attacks, and that, compared with Heuristic and Explicit (HE) and Secure Neighbor Discovery (SEND), RD+ has low resource consumption.

Anti-money laundering management model of central bank high-value payment system based on limited information fusion
WANG Zheng, PENG Jialing, FU Lili, ZHANG Jialing
Journal of Computer Applications    2014, 34 (3): 869-872.   DOI: 10.11772/j.issn.1001-9081.2014.03.0869
To deal with the problem of inter-bank money laundering, a new anti-money laundering model combining limited information management methods with the central bank High-Value Payment System (HVPS) architecture was presented. The proposed model utilizes distributed monitor nodes to trace money laundering crimes and uses an event description method to record the crime procedures. A new grey relational information fusion algorithm was designed to integrate multi-monitor information, and an improved power spectral algorithm was proposed for fast data analysis and money laundering recognition. The simulation results show that the model has better processing performance and anti-money laundering recognition accuracy than other models; specifically, it improves money-laundering client coverage by 12%, the discovery rate by 12% and the recall rate by 5%.
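The fusion step relies on the standard grey relational coefficient; a minimal sketch with illustrative, pre-normalized monitor sequences and resolution coefficient 0.5 is:

```python
import numpy as np

def grey_relational_grade(reference: np.ndarray, candidates: np.ndarray, rho: float = 0.5) -> np.ndarray:
    """Grey relational grade of each candidate sequence (rows) w.r.t. the reference sequence."""
    delta = np.abs(candidates - reference)                 # absolute differences, shape (m, n)
    d_min, d_max = delta.min(), delta.max()
    coeff = (d_min + rho * d_max) / (delta + rho * d_max)  # grey relational coefficients
    return coeff.mean(axis=1)                              # average over observation points

ref = np.array([0.8, 0.6, 0.9, 0.7])                       # e.g. a suspicious-transaction indicator
monitors = np.array([[0.7, 0.5, 0.8, 0.6],
                     [0.1, 0.9, 0.2, 0.4]])
print(grey_relational_grade(ref, monitors))                # first monitor correlates more strongly
```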

Runtime error site analysis tool based on variable tracking
ZHANG Tianjiong, WANG Zheng
Journal of Computer Applications    2014, 34 (3): 857-860.   DOI: 10.11772/j.issn.1001-9081.2014.03.0857
A runtime error is generated during a program's dynamic execution, and traditional debugging tools are needed to analyze its cause when it occurs. Because the real execution environment of some exceptions and multi-threaded programs cannot be reproduced, traditional debugging and analysis means are of limited use. If variable information can be captured during program execution, the runtime error site can be recovered and used as a basis for analyzing the cause of the error. In this paper, a technique for capturing the runtime error site based on variable tracking was proposed; it can capture specific variable information according to user needs and effectively improves the flexibility of access to variable information. Based on it, a tool named Runtime Fault Site Analysis (RFST) was implemented, which can be used to analyze error causes and to provide the error site as well as aided analysis approaches.

Clustered data collection framework based on time series prediction model
WANG Zhenglu, WANG Jun, CHENG Yong
Journal of Computer Applications    2014, 34 (10): 2766-2770.   DOI: 10.11772/j.issn.1001-9081.2014.10.2766
Due to the spatio-temporal continuity of physical attributes such as temperature and illumination, high spatio-temporal correlation exists among the sensed data in a high-density Wireless Sensor Network (WSN). The data redundancy produced by this correlation places a heavy burden on network communication and shortens the network's lifetime. A Clustered Data Collection Framework (CDCF) based on a prediction model was proposed to exploit the data correlation and reduce network traffic. The framework includes a time series prediction model based on the least-squares curve fitting method and an efficient error control strategy. In the process of data collection, the clustered structure exploits the spatial correlation, and the time series prediction model exploits the temporal correlation existing in the sensed data. Simulation experiments show that, in a relatively stable environment, CDCF uses only 10%-20% of the raw data to complete the data collection of the network, and the error of the data restored at the sink is less than the threshold defined by the user.
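The prediction-with-error-control idea can be sketched as a least-squares polynomial fit over a sliding window: a node transmits a reading only when the model's one-step prediction deviates from the measurement by more than the user-defined threshold. The model order and window length below are illustrative:

```python
import numpy as np

def needs_transmission(history: np.ndarray, new_value: float, threshold: float, degree: int = 2) -> bool:
    """Fit a least-squares polynomial to the recent window and predict the next sample;
    transmit only if the prediction error exceeds the user-defined threshold."""
    t = np.arange(len(history))
    coeffs = np.polyfit(t, history, deg=degree)        # curve-fitting least squares
    predicted = np.polyval(coeffs, len(history))       # extrapolate one step ahead
    return abs(predicted - new_value) > threshold

window = np.array([20.1, 20.3, 20.4, 20.6, 20.8])      # e.g. recent temperature readings
print(needs_transmission(window, 21.0, threshold=0.5)) # False: prediction is close enough
```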

Parallel Delaunay algorithm design in lunar surface terrain reconstruction system
WANG Zhe, GAO Sanhong, ZHENG Huiying, LI Lichun
Journal of Computer Applications    2013, 33 (08): 2177-2183.  
The triangulation procedure is one of the time bottlenecks of a 3D reconstruction system. To increase the speed of triangulation, a parallel Delaunay algorithm was designed for a shared-memory multi-core computer. The algorithm employs the divide-and-conquer method and improves the conquer (merge) procedure and the Delaunay mesh optimization procedure to avoid data competition. Experiments were conducted on datasets ranging from 500000 to 5000000 points gathered on the lunar surface simulation ground, and the speedup of the algorithm reached 6.44. In addition, the algorithm complexity and parallel efficiency were fully analyzed, and the algorithm was applied in the lunar surface terrain reconstruction system to realize fast virtual terrain reconstruction.
Dimensionality reduction algorithm of local marginal Fisher analysis based on Mahalanobis distance
LI Feng, WANG Zhengqun, XU Chunlin, ZHOU Zhongxia, XUE Wei
Journal of Computer Applications    2013, 33 (07): 1930-1934.   DOI: 10.11772/j.issn.1001-9081.2013.07.1930
Considering that image data in face recognition applications are high-dimensional and that the Euclidean distance cannot accurately reflect the similarity between samples, a Mahalanobis distance based Local Marginal Fisher Analysis (MLMFA) dimensionality reduction algorithm was proposed. A Mahalanobis distance was first learned from the existing samples; this distance was then used to choose neighbors and to reduce the dimensionality of new samples. Meanwhile, to describe intra-class compactness and inter-class separability, an intra-class "similarity" graph and an inter-class "penalty" graph were constructed using the Mahalanobis distance, so the local structure of the data set is preserved well. On the YALE and FERET databases, MLMFA outperforms algorithms based on the traditional Euclidean distance, with the maximum average recognition rate improved by 1.03% and 6% respectively. The results demonstrate that the proposed algorithm has very good classification and recognition performance.
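Choosing neighbours under a Mahalanobis metric can be sketched as below, with the regularized inverse sample covariance standing in for the learned metric:

```python
import numpy as np

def mahalanobis_neighbors(X: np.ndarray, query: np.ndarray, k: int = 5) -> np.ndarray:
    """Return indices of the k nearest samples to `query` under the Mahalanobis distance
    induced by the (regularized) inverse sample covariance of X."""
    VI = np.linalg.inv(np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1]))
    diff = X - query
    d2 = np.einsum("ij,jk,ik->i", diff, VI, diff)   # squared Mahalanobis distances
    return np.argsort(d2)[:k]

X = np.random.randn(200, 10)
print(mahalanobis_neighbors(X, X[0], k=3))          # the query itself is its own nearest neighbour
```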